Augment CAPTCHA Security Using Adversarial Examples with Neural Style Transfer

Authors

Abstract

To counteract the rise of bots, many CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) have been developed over the years. Attacks employing powerful deep learning techniques [1], however, have achieved high success rates against common CAPTCHAs, including text-based and image-based CAPTCHAs. On the positive side, adversarial examples, which introduce imperceptible noise, have lately been shown to be particularly disruptive to DNNs (Deep Neural Networks). The authors improve the CAPTCHA security architecture by increasing the resilience of adversarial examples when they are combined with neural style transfer. The findings demonstrate that the proposed approach considerably improves the security of ordinary CAPTCHAs.
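
The abstract does not spell out the construction, so the following is only a minimal sketch of the adversarial-example ingredient: FGSM (the Fast Gradient Sign Method of Goodfellow et al.) applied against a surrogate CAPTCHA solver, so that the perturbed image still reads normally to humans but misleads DNN-based attackers. The surrogate model, the epsilon value, and the tensor conventions are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn.functional as F

def fgsm_perturb(solver, image, label, epsilon=8 / 255):
    """Add imperceptible, gradient-aligned noise that degrades a DNN solver."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(solver(image), label)
    loss.backward()
    # Step in the direction that maximizes the surrogate solver's loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

In the pipeline the abstract suggests, a style-transfer network would first restyle the CAPTCHA, and a perturbation of this kind would then be layered on top to resist removal by simple preprocessing.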


Similar Articles

Manifold Regularized Deep Neural Networks using Adversarial Examples

Learning meaningful representations using deep neural networks involves designing efficient training schemes and well-structured networks. Currently, the method of stochastic gradient descent that has a momentum with dropout is one of the most popular training protocols. Based on that, more advanced methods (i.e., Maxout and Batch Normalization) have been proposed in recent years, but most stil...
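
For reference, the training protocol this snippet calls popular is what the minimal PyTorch setup below expresses: stochastic gradient descent with momentum on a network that uses dropout. The architecture and hyperparameters are illustrative assumptions.

import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # randomly zeroes activations during training
    nn.Linear(256, 10),
)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)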

Detecting Adversarial Examples via Neural Fingerprinting

Deep neural networks are vulnerable to adversarial examples, which dramatically alter model output using small input changes. We propose NeuralFingerprinting, a simple, yet effective method to detect adversarial examples by verifying whether model behavior is consistent with a set of secret fingerprints, inspired by the use of biometric and cryptographic signatures. The benefits of our method a...
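
A hedged sketch of the consistency check described above: each secret fingerprint pairs an input perturbation with the change in model output the defender expects it to cause, and inputs whose responses deviate too far are flagged. The shapes, the distance measure, and the thresholding convention are assumptions for illustration, not the paper's exact formulation.

import torch

def fingerprint_mismatch(model, x, directions, expected_deltas):
    """Mean distance between observed and expected output changes."""
    base = model(x)
    errors = [
        torch.norm(model(x + dx) - base - expected)
        for dx, expected in zip(directions, expected_deltas)
    ]
    return torch.stack(errors).mean()

# Flag x as adversarial when the mismatch exceeds a calibrated threshold.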

Generating Adversarial Examples with Adversarial Networks

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires mor...
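
The GAN-based attack shape this snippet describes can be sketched as a generator network that emits a small-magnitude perturbation added to the input. The architecture and magnitude bound below are assumptions, and the training objectives (adversarial loss plus a misclassification loss) are omitted.

import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Emits a bounded perturbation to be added to the input image."""

    def __init__(self, channels=3, bound=0.03):
        super().__init__()
        self.bound = bound
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Tanh keeps the raw output in [-1, 1]; scale it to a small magnitude.
        return (x + self.bound * self.net(x)).clamp(0.0, 1.0)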

Stereoscopic Neural Style Transfer

This paper presents the first attempt at stereoscopic neural style transfer, which responds to the emerging demand for 3D movies or AR/VR. We start with a careful examination of applying existing monocular style transfer methods to left and right views of stereoscopic images separately. This reveals that the original disparity consistency cannot be well preserved in the final stylization result...
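
One common way to express the disparity consistency this snippet refers to, sketched under assumptions (a known per-pixel disparity map, (N, C, H, W) tensors, and an occlusion mask), is to warp the stylized right view into the left view and penalize pixels that disagree:

import torch
import torch.nn.functional as F

def disparity_consistency_loss(stylized_left, stylized_right, disparity, mask):
    n, _, h, w = stylized_left.shape
    # Build a sampling grid that shifts each left-view pixel by its disparity.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32),
        indexing="ij",
    )
    xs = xs.unsqueeze(0).expand(n, -1, -1) - disparity  # matching x in right view
    ys = ys.unsqueeze(0).expand(n, -1, -1)
    grid = torch.stack(
        (2 * xs / (w - 1) - 1, 2 * ys / (h - 1) - 1), dim=-1
    )  # normalized to [-1, 1], as grid_sample expects
    warped = F.grid_sample(stylized_right, grid, align_corners=True)
    # Penalize disagreement only where pixels are visible in both views.
    return ((warped - stylized_left) ** 2 * mask).mean()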

Demystifying Neural Style Transfer

Neural Style Transfer [Gatys et al., 2016] has recently demonstrated very exciting results which catches eyes in both academia and industry. Despite the amazing results, the principle of neural style transfer, especially why the Gram matrices could represent style remains unclear. In this paper, we propose a novel interpretation of neural style transfer by treating it as a domain adaptation pro...
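
For concreteness, the Gram matrix this snippet refers to is the channel-by-channel correlation of a CNN feature map, computed as below under the usual (N, C, H, W) shape convention; the paper's contribution is interpreting the matching of these matrices as matching feature distributions, in the spirit of domain adaptation.

import torch

def gram_matrix(features):
    """Channel correlations of a (N, C, H, W) feature map, as in Gatys et al."""
    n, c, h, w = features.shape
    flat = features.reshape(n, c, h * w)               # one row per channel
    return flat @ flat.transpose(1, 2) / (c * h * w)   # (N, C, C), normalized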

Journal

Journal title: IEEE Access

Year: 2023

ISSN: 2169-3536

DOI: https://doi.org/10.1109/access.2023.3298442